
Vectors: Part 1

Vectors have multiple definitions:

  • The mathematical definition: an element of a vector space,
  • The physical definition: a quantity that has both magnitude and direction,
  • The computer science definition: a one-dimensional array.

It's important to note that these definitions are not mutually exclusive. In fact, they are all related to each other, and can be thought of as different perspectives on the same concept.

A vector is often represented as an arrow in space, with a starting point and an ending point. The length of the arrow represents the magnitude of the vector, and the direction of the arrow represents the direction of the vector. Typically the start of the arrow is called the tail, and the end of the arrow is called the tip.


Vector Notation and Real Coordinate Space

Vectors are often written as bold letters with an arrow on top, such as $\vec{\mathbf{v}}$.

For instance, a vector $\vec{v} = \begin{bmatrix} v_x \\ v_y \end{bmatrix}$ can be represented as an arrow in the plane, starting at the origin and ending at the point $(v_x, v_y)$.

Note the differences between vectors and points; vectors are not tied to a specific location in space, but rather represent a direction and magnitude.

You can think of it as an instruction from a GPS device that says "move north".

Real Coordinate Space

In two dimensions, the real coordinate space is denoted by $\mathbb{R}^2$. This is the set of all ordered pairs of real numbers, and can be visualized as the plane.

Essentially, $\mathbb{R}^2$ means "all pairs of real numbers", which contains all vectors in two-dimensional space. You could say that $\vec{v} \in \mathbb{R}^2$, which means that the vector $\vec{v}$ is an element of the real coordinate space and has two dimensions.

In three dimensions, the real coordinate space is instead denoted by $\mathbb{R}^3$.
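
To make this concrete, here's a small sketch in Python with NumPy (not part of the original notes; the specific numbers are just placeholders) that stores vectors in $\mathbb{R}^2$ and $\mathbb{R}^3$ as ordered lists of real numbers:

```python
import numpy as np

# A vector in R^2: an ordered pair of real numbers (placeholder values).
v2 = np.array([1.0, 2.0])

# A vector in R^3: an ordered triple of real numbers (placeholder values).
v3 = np.array([1.0, 2.0, 3.0])

print(v2.shape)  # (2,) -- two components, so v2 is an element of R^2
print(v3.shape)  # (3,) -- three components, so v3 is an element of R^3
```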

Vector Spaces

A vector space is a set of vectors that is closed under addition and scalar multiplication.

What this means is that if you take two vectors from the set and add them together, the result is still in the set.

Similarly, if you take a vector from the set and multiply it by a scalar, the result is still in the set.

Examples of vector spaces include $\mathbb{R}^2$ and $\mathbb{R}^3$.

The terms "vector space" and "real coordinate space" are often used interchangeably, but they are not exactly the same thing.

Zero Vector

In any vector space, there is a special vector known as the zero vector, denoted as $\vec{0}$.

This is simply a vector with all components equal to zero.

Magnitude

The magnitude of a vector $\vec{v}$ is the length of the arrow representing the vector. It is also known as the norm or the length of the vector, and is written as $\|\vec{v}\|$.

It's simple to calculate the magnitude of a vector in two dimensions.

Recall the Pythagorean theorem: $a^2 + b^2 = c^2$. In this case, the legs $a$ and $b$ are the $x$- and $y$-components of the vector, so the magnitude of a vector $\vec{v} = \begin{bmatrix} v_x \\ v_y \end{bmatrix}$ is:

$$\|\vec{v}\| = \sqrt{v_x^2 + v_y^2}$$
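
As a quick sanity check of the formula, here's a short Python sketch; the vector $\begin{bmatrix} 3 \\ 4 \end{bmatrix}$ is an assumed example, not one taken from the text above:

```python
import numpy as np

v = np.array([3.0, 4.0])  # assumed example vector

# Magnitude via the Pythagorean theorem: sqrt(v_x^2 + v_y^2)
manual = np.sqrt(v[0]**2 + v[1]**2)

# NumPy's built-in Euclidean norm gives the same result.
built_in = np.linalg.norm(v)

print(manual, built_in)  # 5.0 5.0
```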

Vector Addition

Vectors can be added together by adding their corresponding components. We will denote the sum of two vectors $\vec{a}$ and $\vec{b}$ as $\vec{a} + \vec{b}$.

Vector addition can be shown both graphically and algebraically.

Let's try to add two vectors, $\vec{a}$ and $\vec{b}$.

Vector Addition: Graphical Representation

Remember that vectors can be represented as arrows in space. A vector $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$, or $(1, 2)$, means "move 1 unit in the $x$ direction and 2 units in the $y$ direction".

To add two vectors graphically, you can place the tail of the second vector at the tip of the first vector, and then draw a new vector from the tail of the first vector to the tip of the second vector.

The new vector is then an arrow from the origin to the tip of the second vector. This new vector is the sum of the two vectors.

Vector Addition: Algebraic Representation

To add two vectors algebraically, you simply add their corresponding components.

Essentially, the $x$-components are added together, and the $y$-components are added together, and so on for higher dimensions:

$$\vec{a} + \vec{b} = \begin{bmatrix} a_x \\ a_y \end{bmatrix} + \begin{bmatrix} b_x \\ b_y \end{bmatrix} = \begin{bmatrix} a_x + b_x \\ a_y + b_y \end{bmatrix}$$

Conditions

To add two vectors, they must have the same number of dimensions. You can't add a two-dimensional vector to a three-dimensional vector.
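
Here's a small Python sketch of component-wise addition, using assumed example vectors; note how adding vectors of different dimensions fails:

```python
import numpy as np

a = np.array([1.0, 2.0])   # assumed example vectors
b = np.array([3.0, -1.0])

# Component-wise addition: x-components add, y-components add.
print(a + b)  # [4. 1.]

# Vectors of different dimensions cannot be added.
c = np.array([1.0, 2.0, 3.0])
try:
    a + c
except ValueError as err:
    print("cannot add a 2D vector to a 3D vector:", err)
```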

Commutative Property

One important property of vector addition is that it is commutative. This means that the order in which you add the vectors does not matter. This is important because some other vector operations are not commutative.

To show this, let's first add $\vec{a}$ and $\vec{b}$, and then add $\vec{b}$ and $\vec{a}$.

A visual representation of this is shown below:

Notice how adding $\vec{a}$ and $\vec{b}$ gives the same result as adding $\vec{b}$ and $\vec{a}$. That is, they reach the same point in space.

This can be shown algebraically as well:

$$\vec{a} + \vec{b} = \begin{bmatrix} a_x + b_x \\ a_y + b_y \end{bmatrix} = \begin{bmatrix} b_x + a_x \\ b_y + a_y \end{bmatrix} = \vec{b} + \vec{a}$$

Since normal addition is commutative, the two results are equal.

Scalar Multiplication

Vectors can also be multiplied by a scalar, which is a single number. This operation is known as scalar multiplication.

Essentially, multiplying by a scalar scales the vector by that factor, which is why they're called scalars.

Scalar Multiplication: Graphical Representation

Graphically, scalar multiplication stretches or shrinks the vector by the scalar factor.

For example, consider a vector $\vec{v}$ and the scalar $2$.

The blue vector is the result of multiplying the red vector by 2. Notice how the blue vector is twice as long as the red vector.

We can also multiply by a scalar between 0 and 1, which will shrink the vector.

Multiplying by a negative scalar will also flip the vector in the opposite direction.


Scalar Multiplication: Algebraic Representation

To multiply a vector by a scalar, you simply multiply each component of the vector by the scalar:

$$c\vec{v} = c\begin{bmatrix} v_x \\ v_y \end{bmatrix} = \begin{bmatrix} c\,v_x \\ c\,v_y \end{bmatrix}$$

Scalar multiplication can be applied to vectors of any dimension.
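
A short Python sketch of scalar multiplication, using an assumed example vector:

```python
import numpy as np

v = np.array([1.0, 2.0])  # assumed example vector

print(2 * v)     # [2. 4.]   -- stretches the vector
print(0.5 * v)   # [0.5 1. ] -- shrinks it
print(-1 * v)    # [-1. -2.] -- flips it to the opposite direction
```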

Vector Representations: Unit/Basis Vectors

There are multiple ways to represent vectors.

We've already seen the arrow representation, where vectors are represented as arrows in space, as well as the algebraic representation, where vectors are represented as ordered lists of numbers.

Another way to represent vectors is through the use of unit or basis vectors.

Unit Vectors

A unit vector is a vector with a magnitude of 1.

The unit vector for a given vector $\vec{v}$ is typically denoted as $\hat{v}$ (read "v-hat").

To find the unit vector for a given vector, you simply divide the vector by its magnitude:

$$\hat{v} = \frac{\vec{v}}{\|\vec{v}\|}$$

Since dividing by a positive scalar doesn't change the direction of the vector, the unit vector will have the same direction as the original vector, while its magnitude will be 1.

Example Problem: Finding the Unit Vector

Find the unit vector for a vector $\vec{v} = \begin{bmatrix} v_x \\ v_y \end{bmatrix}$.

To find the unit vector, we first need to calculate the magnitude of the vector:

$$\|\vec{v}\| = \sqrt{v_x^2 + v_y^2}$$

Now we can find the unit vector by dividing the vector by its magnitude:

$$\hat{v} = \frac{\vec{v}}{\|\vec{v}\|} = \frac{1}{\sqrt{v_x^2 + v_y^2}} \begin{bmatrix} v_x \\ v_y \end{bmatrix}$$
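
For a concrete illustration, the Python sketch below assumes the vector $\begin{bmatrix} 3 \\ 4 \end{bmatrix}$ purely as an example:

```python
import numpy as np

v = np.array([3.0, 4.0])       # assumed vector for illustration

magnitude = np.linalg.norm(v)  # sqrt(3^2 + 4^2) = 5
unit = v / magnitude           # divide each component by the magnitude

print(magnitude)               # 5.0
print(unit)                    # [0.6 0.8]
print(np.linalg.norm(unit))    # 1.0 (up to floating-point rounding)
```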

Basis Vectors: Special Unit Vectors

In some cases, it's useful to define special unit vectors that are aligned with the axes of the coordinate system.

For a two-dimensional space, these are typically denoted as $\hat{x}$ and $\hat{y}$ (the unit vectors pointing along the $x$- and $y$-axes).

Imagine a vector $\vec{v} = \begin{bmatrix} 3 \\ 4 \end{bmatrix}$. As we know, this means "move 3 units in the $x$ direction and 4 units in the $y$ direction".

You could imagine separating this vector into two components: one in the $x$ direction and one in the $y$ direction:

$$\begin{bmatrix} 3 \\ 4 \end{bmatrix} = \begin{bmatrix} 3 \\ 0 \end{bmatrix} + \begin{bmatrix} 0 \\ 4 \end{bmatrix} = 3\hat{x} + 4\hat{y}$$

Now we have a representation of the vector as a sum of two scaled unit vectors: one in the $x$ direction and one in the $y$ direction.

We can represent any vector in this coordinate system by scaling these two unit vectors. It's a bit like how you can represent any color by mixing red, green, and blue.

Since it's so common, these unit vectors are often denoted as $\hat{i}$ and $\hat{j}$:

$$\hat{i} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}, \qquad \hat{j} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \qquad \begin{bmatrix} 3 \\ 4 \end{bmatrix} = 3\hat{i} + 4\hat{j}$$

This also makes it very easy to perform vector operations, as you can simply add the corresponding components of the vectors.
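
As a quick illustration in Python (the decomposition of $\begin{bmatrix} 3 \\ 4 \end{bmatrix}$ follows the example above):

```python
import numpy as np

i_hat = np.array([1.0, 0.0])  # unit vector along the x-axis
j_hat = np.array([0.0, 1.0])  # unit vector along the y-axis

# The vector [3, 4] is 3 copies of i-hat plus 4 copies of j-hat.
v = 3 * i_hat + 4 * j_hat
print(v)  # [3. 4.]
```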

Parametric Representations of Lines

Vectors can also be used to represent lines in space. Recall that lines have both a slope and an intercept.

Let $\vec{v}$ be a vector in $\mathbb{R}^2$. What we can do is scale this vector by an arbitrary scalar $t$.

Then, if we let $t$ vary over all real numbers, we get a line in space.

This is written as:

$$L = \{ t\vec{v} \mid t \in \mathbb{R} \}$$

These vectors are collinear, meaning they all lie on the same line. The vector $\vec{v}$ can be thought of as the slope of the line.

But What About the Intercept?

In the above representation, we don't have a way to modify the intercept; it will always pass through the origin.

To shift the line to a different position, we can add another vector $\vec{b}$ to every point on the line.

Then, the line can be represented as:

$$L = \{ \vec{b} + t\vec{v} \mid t \in \mathbb{R} \}$$

You should notice that this is very similar to the equation of a line in slope-intercept form: $y = mx + b$.
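
To see the parametric form in action, here's a Python sketch that samples a few values of $t$; the offset and direction vectors are assumed examples:

```python
import numpy as np

b = np.array([1.0, 2.0])   # assumed offset (intercept) vector
v = np.array([3.0, 1.0])   # assumed direction (slope) vector

# Sampling a few values of t traces out points on the line b + t*v.
for t in [-1.0, 0.0, 0.5, 1.0, 2.0]:
    point = b + t * v
    print(t, point)
```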

Why Is This Useful?

You might be wondering why we would represent lines in this way, if we can already use the slope-intercept form.

This brings us to one of the most important advantages of using vectors: they generalize to higher dimensions.

The form $y = mx + b$ only works in two dimensions, but the vector form works in any number of dimensions.

This makes it very easy to generalize concepts from two dimensions to three dimensions and beyond.

Example Problem: Identify the Line's Equation from a Point and Slope

Identify the equation of the line that passes through the point $(x_0, y_0)$ and has a slope of $m$.

To find the equation of the line, we first need to find the vector that represents the slope.

Since the slope is $m$ (a rise of $m$ for every run of 1), the direction vector is $\vec{v} = \begin{bmatrix} 1 \\ m \end{bmatrix}$.

Next, we need to make the line pass through the point $(x_0, y_0)$. One simple way is to shift the line from the origin by the position vector of that point, $\vec{b} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix}$.

Hence, the equation of the line is:

$$L = \left\{ \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} + t\begin{bmatrix} 1 \\ m \end{bmatrix} \;\middle|\; t \in \mathbb{R} \right\}$$

Example Problem: Identify the Line's Equation from Two Points

Identify the equation of the line that passes through these two points: $P_1 = (x_1, y_1)$ and $P_2 = (x_2, y_2)$.

To find the equation of the line, we first need to find the vector that represents the slope.

We can visualize the slope by drawing the two points as position vectors, $\vec{p}_1$ and $\vec{p}_2$:

You can see that the slope is given by a vector that connects the two points, which is the difference of the two position vectors.

Hence, the vector representing the slope is $\vec{v} = \vec{p}_2 - \vec{p}_1 = \begin{bmatrix} x_2 - x_1 \\ y_2 - y_1 \end{bmatrix}$.

To find the equation of the line, we need to find the vector that represents the intercept.

We can find the intercept by shifting the line by the position vector of either point.

Therefore, we have two simple solutions:

$$L = \{ \vec{p}_1 + t\vec{v} \mid t \in \mathbb{R} \} \qquad \text{or} \qquad L = \{ \vec{p}_2 + t\vec{v} \mid t \in \mathbb{R} \}$$

We can plug in the values to get the final equation:

$$L = \left\{ \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + t\begin{bmatrix} x_2 - x_1 \\ y_2 - y_1 \end{bmatrix} \;\middle|\; t \in \mathbb{R} \right\}$$

Deriving Parametric Equations from a Line Equation

Given a line equation in the form $L = \{ \vec{b} + t\vec{v} \mid t \in \mathbb{R} \}$, we can derive the parametric equations for the line.

Let's use the same line equation as before:

$$L = \left\{ \begin{bmatrix} x_1 \\ y_1 \end{bmatrix} + t\begin{bmatrix} x_2 - x_1 \\ y_2 - y_1 \end{bmatrix} \;\middle|\; t \in \mathbb{R} \right\}$$

To derive the parametric equations, we can simply use each component of the vector as a separate equation:

$$x(t) = x_1 + t(x_2 - x_1), \qquad y(t) = y_1 + t(y_2 - y_1)$$

Linear Combinations

A linear combination is a combination of vectors in which each vector is multiplied by a scalar and then added together.

Let us consider a simple example to understand linear combinations better.

Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ be vectors in $\mathbb{R}^m$.

Let $c_1, c_2, \ldots, c_n$ be real scalars. All a linear combination is, is the sum of the vectors, each multiplied by its scalar:

$$c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n$$

The reason it's called "linear" is that the vectors are only multiplied by scalars and then added together. We aren't multiplying vectors by vectors, or taking any exponents or anything like that.
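
Here's a small Python sketch of a linear combination, with assumed vectors and scalars:

```python
import numpy as np

v1 = np.array([1.0, 0.0, 2.0])   # assumed example vectors in R^3
v2 = np.array([0.0, 1.0, -1.0])
v3 = np.array([2.0, 2.0, 0.0])

c1, c2, c3 = 2.0, -1.0, 0.5      # assumed scalars

# A linear combination: each vector scaled by its scalar, then all of them added.
combo = c1 * v1 + c2 * v2 + c3 * v3
print(combo)  # [3. 0. 5.]
```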

Example Problem: Determining Coefficients

Let $\vec{v}_1$ and $\vec{v}_2$ be two non-collinear vectors in $\mathbb{R}^2$.

Let $\vec{x} = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$ be an arbitrary vector in $\mathbb{R}^2$. Express $\vec{x}$ as a linear combination of $\vec{v}_1$ and $\vec{v}_2$.

To express $\vec{x}$ as a linear combination of $\vec{v}_1$ and $\vec{v}_2$, we need to find scalars $c_1$ and $c_2$ such that:

$$c_1\vec{v}_1 + c_2\vec{v}_2 = \vec{x}$$

We can do this for each component of the vectors, which gives a system of two linear equations in the two unknowns $c_1$ and $c_2$.
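
Numerically, finding the coefficients amounts to solving a small linear system. The Python sketch below assumes concrete vectors, since the original ones aren't shown here:

```python
import numpy as np

v1 = np.array([1.0, 2.0])   # assumed vectors for illustration
v2 = np.array([3.0, 1.0])
x = np.array([7.0, 4.0])    # vector to express as c1*v1 + c2*v2

# The columns of A are v1 and v2, so A @ [c1, c2] = x is the component-wise system.
A = np.column_stack([v1, v2])
c1, c2 = np.linalg.solve(A, x)

print(c1, c2)                             # c1 = 1.0, c2 = 2.0 (up to rounding)
print(np.allclose(c1 * v1 + c2 * v2, x))  # True
```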

Span of Linear Combinations

The reason linear combinations are important is because they help us define the span of a set of vectors.

Let's say we have two non-collinear vectors, $\vec{v}_1$ and $\vec{v}_2$, in $\mathbb{R}^2$.

Consider the linear combination:

$$c_1\vec{v}_1 + c_2\vec{v}_2$$

Since $c_1$ and $c_2$ can be any real numbers, this linear combination can become any vector in $\mathbb{R}^2$.

This is written mathematically as:

$$\operatorname{span}(\vec{v}_1, \vec{v}_2) = \mathbb{R}^2$$

Formally, the span can be defined as the set of all possible linear combinations of a set of vectors:

$$\operatorname{span}(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n) = \{ c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n \mid c_1, c_2, \ldots, c_n \in \mathbb{R} \}$$

Exception: Collinear Vectors

If the vectors are collinear, meaning they lie on the same line, then the span of the vectors is the line that they lie on.

(Think of it visually.)

Exception: Zero Vectors

Another exception is the zero vector. Since the zero vector is just the origin, the span of the zero vector is just the origin.

Linear Dependence and Independence

Linear combinations are also used to determine if a set of vectors is linearly dependent or independent.

A set of vectors is linearly dependent if one of the vectors can be written as a linear combination of the others.

For example, consider three vectors $\vec{v}_1$, $\vec{v}_2$, and $\vec{v}_3$, where the third vector can be written as a linear combination of the first two:

$$\vec{v}_3 = c_1\vec{v}_1 + c_2\vec{v}_2$$

Another way to think of it is, you can make a combination of the three vectors that equals the zero vector:

$$c_1\vec{v}_1 + c_2\vec{v}_2 - \vec{v}_3 = \vec{0}$$

If you can find a set of scalars that aren't all zero such that the combination equals the zero vector, then the vectors are linearly dependent.

If the only scalars that work are all zero, then the vectors are linearly independent.
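
One practical way to run this test is to stack the vectors as columns of a matrix and check its rank, as in the Python sketch below (the example vectors are assumptions, not taken from the text):

```python
import numpy as np

def linearly_independent(*vectors):
    """True if the only combination giving the zero vector uses all-zero scalars."""
    # Stack the vectors as columns; independence <=> the matrix has full column rank.
    matrix = np.column_stack(vectors)
    return np.linalg.matrix_rank(matrix) == len(vectors)

# Assumed example vectors:
print(linearly_independent(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # True
print(linearly_independent(np.array([1.0, 2.0]), np.array([2.0, 4.0])))  # False (collinear)
print(linearly_independent(np.array([1.0, 0.0]), np.array([0.0, 1.0]),
                           np.array([3.0, 4.0])))                        # False (3 vectors in R^2)
```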

Implications of Linear Dependence

If a set of vectors is linearly dependent, then it means that one of the vectors can be written as a linear combination of the others.

This means that one of the vectors is redundant, and you can remove it without losing any information.

For example, consider some GPS software that says "Go 3 miles north, then 4 miles south."

This is like saying "Go $\begin{bmatrix} 0 \\ 3 \end{bmatrix}$, then $\begin{bmatrix} 0 \\ -4 \end{bmatrix}$" (taking north as the positive $y$ direction).

The two vectors are linearly dependent because you can linearly combine them to get $\vec{0}$. This means we can describe the north vector in terms of the south vector, or vice versa. Therefore, we can remove one of them without losing any information.

The GPS can instead simply say "Go 1 mile south."

Consider another instruction: "Go 2 miles north, then 2 miles east."

This is like saying "Go $\begin{bmatrix} 0 \\ 2 \end{bmatrix}$, then $\begin{bmatrix} 2 \\ 0 \end{bmatrix}$."

These two vectors are linearly independent because you can't combine them (with scalars that aren't all zero) to get $\vec{0}$.

We cannot describe the north vector in terms of the east vector, or vice versa.

This means that both vectors are necessary to describe the movement.

The GPS can also say "Go about 2.8 miles northeast" (more precisely, $2\sqrt{2}$ miles). This would be a linear combination of the two vectors.

Linear Dependence to Determine Span

If you have a set of $n$ vectors in $\mathbb{R}^n$ and you want to determine whether they span the space, you can check if they are linearly independent.

If they are linearly independent, then they span $\mathbb{R}^n$, and if not, they don't.

Proof

Let $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n$ be $n$ linearly independent vectors in the vector space $\mathbb{R}^n$.

Assume that $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) \neq \mathbb{R}^n$. This means that there exists a vector $\vec{w}$ in $\mathbb{R}^n$ that is not in $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n)$.

Consider the set $S = \{\vec{v}_1, \ldots, \vec{v}_n, \vec{w}\}$. If $S$ were linearly dependent, then $\vec{w}$ could be written as a linear combination of the other vectors in $S$ (any dependence relation must give $\vec{w}$ a nonzero coefficient, since $\vec{v}_1, \ldots, \vec{v}_n$ are linearly independent on their own), which is not possible because $\vec{w}$ is not in $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n)$. Therefore, $S$ must be linearly independent.

However, $S$ has $n + 1$ vectors, and a linearly independent set in $\mathbb{R}^n$ can contain at most $n$ vectors. This contradicts the fact that $\mathbb{R}^n$ is an $n$-dimensional vector space.

Therefore, the assumption that $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) \neq \mathbb{R}^n$ leads to a contradiction, so it is false, and hence $\operatorname{span}(\vec{v}_1, \ldots, \vec{v}_n) = \mathbb{R}^n$, proven by contradiction.

Linear Dependence of the Basis Vectors

The basis vectors of a space are always linearly independent.

Consider the basis vectors for 2-dimensional Cartesian space: $\hat{i}$ and $\hat{j}$. $\hat{i}$ and $\hat{j}$ are linearly independent because you can't write one in terms of the other. Therefore, both are needed to describe all of $\mathbb{R}^2$.

Example Problem: Linear Dependence of Two Vectors

Let $\vec{v}_1$ and $\vec{v}_2$ be vectors in $\mathbb{R}^2$. Determine if the vectors are linearly dependent or independent.

To determine if the vectors are linearly dependent, we need to see if there's a way to combine them to get the zero vector:

$$c_1\vec{v}_1 + c_2\vec{v}_2 = \vec{0}$$

Writing this out component by component gives a system of linear equations that can be solved to find $c_1$ and $c_2$.

Recall that for linear dependence, the coefficients must not all be zero. If, as in this example, the only solution is $c_1 = 0$ and $c_2 = 0$, then the vectors are linearly independent.

Example Problem: Linear Dependence of Three Vectors

Let $\vec{v}_1$, $\vec{v}_2$, and $\vec{v}_3$ be vectors in $\mathbb{R}^2$. Determine if the vectors are linearly dependent or independent.

To determine if the vectors are linearly dependent, we need to see if there's a way to combine them to get the zero vector:

$$c_1\vec{v}_1 + c_2\vec{v}_2 + c_3\vec{v}_3 = \vec{0}$$

Writing this out component by component gives two equations in the three unknowns $c_1$, $c_2$, and $c_3$. Since we want to find a set of scalars that aren't all zero, we can pick an arbitrary value, like $c_3 = 1$.

Now we can solve the first equation for $c_1$, and substitute $c_1$ and $c_3$ back into the second equation to find $c_2$.

We found three scalars that aren't all zero yet satisfy the equation. Therefore, the vectors are linearly dependent.

Alternatively, let's consider a logical approach.

For 3 vectors in $\mathbb{R}^2$, in the best-case scenario, two of the vectors are linearly independent. In that case they span the entire plane, so the third vector, which lies in that plane, can be written as a linear combination of the other two and is therefore redundant. Hence, three vectors in $\mathbb{R}^2$ are always linearly dependent.

Linear Subspaces

Recall that $\mathbb{R}^n$ is the set of all $n$-dimensional vectors. You could visualize it as an $n$-dimensional space, but we'll use the most abstract definition for now.

$\mathbb{R}^n$ can be defined as:

$$\mathbb{R}^n = \left\{ \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix} \;\middle|\; x_1, x_2, \ldots, x_n \in \mathbb{R} \right\}$$

Let $V$ be a subset of $\mathbb{R}^n$. In order for $V$ to be a linear subspace of $\mathbb{R}^n$, it must satisfy the following conditions:

  1. $V$ must contain the zero vector.
  2. $V$ must be closed under addition, so if $\vec{a}$ and $\vec{b}$ are in $V$, then $\vec{a} + \vec{b}$ must also be in $V$.
  3. $V$ must be closed under scalar multiplication, so if $\vec{a}$ is in $V$ and $c$ is a scalar, then $c\vec{a}$ must also be in $V$.

Example Problem: Determining if the Set Containing Only the Zero Vector is a Linear Subspace

Let $V = \{\vec{0}\}$. Is $V$ a linear subspace of $\mathbb{R}^n$?

Let's check the conditions:

  1. $V$ contains the zero vector, so this condition is satisfied.
  2. $V$ is closed under addition. The only possible addition is $\vec{0} + \vec{0} = \vec{0}$, which is in $V$.
  3. $V$ is closed under scalar multiplication. Any scalar multiplied by the zero vector gives the zero vector: $c\vec{0} = \vec{0}$.

Therefore, $V$ is a linear subspace of $\mathbb{R}^n$.

Example Problem: Determining if a Set of Two Quadrants is a Linear Subspace

Let $V = \left\{ \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \in \mathbb{R}^2 \;\middle|\; x_1 \geq 0 \right\}$. Is $V$ a linear subspace of $\mathbb{R}^2$?

This set contains all vectors in the first and fourth quadrants. It visually looks like the right half of the plane.

Once again, let's check the conditions:

  1. $V$ contains the zero vector, so this condition is satisfied.

  2. $V$ is closed under addition:

    $$\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} + \begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} a_1 + b_1 \\ a_2 + b_2 \end{bmatrix}$$

    Since $a_1$ and $b_1$ are both non-negative, $a_1 + b_1$ is also non-negative, so the sum is in $V$.

  3. $V$ is not closed under scalar multiplication. You could multiply by a negative scalar and get a vector in the left half of the plane:

    $$-1 \cdot \begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} -a_1 \\ -a_2 \end{bmatrix}$$

    Since $a_1 > 0$ for some vectors in $V$, $-a_1$ is negative for those vectors, so the scalar multiple is not in $V$.

Therefore, $V$ is not a linear subspace of $\mathbb{R}^2$.

Example Problem: Spans as Linear Subspaces

Let $V = \operatorname{span}(\vec{v}_1, \vec{v}_2, \vec{v}_3)$ be the span of three vectors in $\mathbb{R}^n$. Is $V$ a linear subspace of $\mathbb{R}^n$?

The span of the vectors is the set of all possible linear combinations of the vectors.

We can let $\vec{x}$ be a vector in the span:

$$\vec{x} = c_1\vec{v}_1 + c_2\vec{v}_2 + c_3\vec{v}_3$$

Let's check the conditions:

  1. The span contains the zero vector, since we can set all the coefficients to zero:

    $$0\vec{v}_1 + 0\vec{v}_2 + 0\vec{v}_3 = \vec{0}$$

  2. To determine if the span is closed under addition, let $\vec{y} = d_1\vec{v}_1 + d_2\vec{v}_2 + d_3\vec{v}_3$ be another vector in the span.

    The sum of the two vectors is also in the span, since it is again a linear combination of the three vectors:

    $$\vec{x} + \vec{y} = (c_1 + d_1)\vec{v}_1 + (c_2 + d_2)\vec{v}_2 + (c_3 + d_3)\vec{v}_3$$

  3. The span is closed under scalar multiplication, since a scalar multiple of a linear combination is also a linear combination:

    $$a\vec{x} = (a c_1)\vec{v}_1 + (a c_2)\vec{v}_2 + (a c_3)\vec{v}_3$$

Therefore, the span of the vectors is a linear subspace of $\mathbb{R}^n$.

In fact, the span of any set of vectors is always a linear subspace.

Defining a Linear Subspace with a Basis

A linear subspace can be defined by the span of a set of linearly independent vectors.

Let $S = \{\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k\}$ be a set of linearly independent vectors in $\mathbb{R}^n$.

The linear subspace defined by $S$ is:

$$V = \operatorname{span}(S) = \operatorname{span}(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k)$$

Recall that for a set of vectors to be linearly independent, the only solution to the equation $c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_k\vec{v}_k = \vec{0}$ is $c_1 = c_2 = \cdots = c_k = 0$.

If all of this is true, then $S$ is a basis for $V$. More formally, a basis of a linear subspace is a "minimal" set of vectors that spans the space.
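
As a rough numerical check, the Python sketch below (with assumed vectors) verifies that a candidate set is linearly independent, and therefore a basis for the subspace it spans:

```python
import numpy as np

# Assumed candidate vectors in R^3 (not taken from the text).
vectors = [np.array([1.0, 0.0, 1.0]),
           np.array([0.0, 1.0, 1.0])]

matrix = np.column_stack(vectors)
rank = np.linalg.matrix_rank(matrix)

# If the rank equals the number of vectors, they are linearly independent,
# so they form a basis for the subspace they span (here, a plane in R^3).
print(rank == len(vectors))  # True
```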

Example Problem: Determining the Span of a Set of Vectors

Let $S = \{\vec{v}_1, \vec{v}_2\}$ be a set of two non-collinear vectors in $\mathbb{R}^2$. Determine the span of $S$.

To determine the span of the set, we need to find all possible linear combinations of the vectors. Let $\begin{bmatrix} x \\ y \end{bmatrix}$ be a vector in the span. Then, for some scalars $c_1$ and $c_2$:

$$c_1\vec{v}_1 + c_2\vec{v}_2 = \begin{bmatrix} x \\ y \end{bmatrix}$$

This gives us a system of linear equations, one for each component.

Solving the second equation for $c_2$ and substituting into the first equation expresses $c_1$, and then $c_2$, in terms of $x$ and $y$.

So, for any values of $x$ and $y$, you can find $c_1$ and $c_2$ that satisfy the equation.

Therefore, $x$ and $y$ can be any real numbers, and the span of the set is $\mathbb{R}^2$.

Summary

We have covered a lot of the fundamental concepts of linear algebra in this lesson:

  • Vectors are quantities that have both magnitude and direction.

  • Vectors can be added together and multiplied by scalars. These operations are commutative, associative, and distributive.

  • Vectors can be represented as arrows in space, with the tail at the origin.

  • Vectors have the notation $\vec{v}$, and their components can be written as $\begin{bmatrix} v_x \\ v_y \end{bmatrix}$ (for 2D vectors).

  • The magnitude of a vector can be calculated using the Pythagorean theorem in Cartesian space: $\|\vec{v}\| = \sqrt{v_x^2 + v_y^2}$

  • Unit vectors are vectors with a magnitude of 1.

  • The zero vector is the vector with all components equal to zero.

  • Linear combinations are combinations of vectors in which each vector is multiplied by a scalar and then added together: $c_1\vec{v}_1 + c_2\vec{v}_2 + \cdots + c_n\vec{v}_n$

  • The span of a set of vectors is the set of all possible linear combinations of the vectors.

  • Linear subspaces are sets of vectors that contain the zero vector, are closed under addition, and are closed under scalar multiplication.

  • The basis of a linear subspace is a set of linearly independent vectors that span the space.

  • The span of a set of vectors is always a linear subspace.

Next, we will consider how vectors can be multiplied together using the dot product and the cross product, and apply them to solve some problems.